Traffic Signal Control: A Double Q-learning Approach

Authors

Abstract

Currently, the use of information and communication technologies for solving economic, social, transportation, and other problems in the urban environment is usually considered within the smart city concept. Optimal traffic management and, in particular, traffic signal control are among the key components of smart cities. In this paper, we investigate a reinforcement learning approach, namely double Q-learning, to solve the traffic signal control problem. Both initial data on the distribution of connected vehicles and aggregated characteristics of traffic flows are used to describe the state of the agent. Experimental studies of the proposed model were carried out on synthetic and real road networks using the CityFlow microscopic traffic simulator.
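For readers unfamiliar with the algorithm named in the abstract, the core of tabular double Q-learning (van Hasselt, 2010) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the table shapes, hyperparameters, and function names are assumptions for the sake of the example. Two value tables are maintained; on each step one table selects the greedy action while the other evaluates it, which reduces the overestimation bias of standard Q-learning.

```python
import random
import numpy as np

def double_q_update(QA, QB, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular double Q-learning update.

    QA, QB: two independent Q-tables of shape (n_states, n_actions).
    With probability 0.5 update QA using QB as the evaluator,
    otherwise update QB using QA as the evaluator.
    """
    if random.random() < 0.5:
        a_star = int(np.argmax(QA[s_next]))           # QA selects the action
        target = r + gamma * QB[s_next, a_star]       # QB evaluates it
        QA[s, a] += alpha * (target - QA[s, a])
    else:
        b_star = int(np.argmax(QB[s_next]))           # QB selects the action
        target = r + gamma * QA[s_next, b_star]       # QA evaluates it
        QB[s, a] += alpha * (target - QB[s, a])
```

In a signal-control setting, `s` would encode an intersection's observed state (e.g. queue lengths or connected-vehicle positions) and `a` a signal phase choice; those encodings are specific to the paper and not reproduced here.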


Similar Papers

A model predictive control approach for decentralized traffic signal control

In this paper, decentralized control is applied to the traffic signal control problem using model predictive control. A point-queue model is used with red-green signal transition times. A traffic signal controller is designed to minimize queue lengths using information from the adjacent intersections. After the signal controller gathers the needed information from the adjacent intersecti...


Reinforcement Learning For Adaptive Traffic Signal Control

By 2050, two-thirds of the world’s 9.6 billion people will live in urban areas [2]. In many cities, opportunities to expand urban road networks are limited, so existing roads will need to more efficiently accommodate higher volumes of traffic. Consequently, there is a pressing need for technologically viable, low-cost solutions that can work with existing infrastructure to help alleviate increa...


Predictive Q-Routing: A Memory-based Reinforcement Learning Approach to Adaptive Traffic Control

In this paper, we propose a memory-based Q-learning algorithm called predictive Q-routing (PQ-routing) for adaptive traffic control. We attempt to address two problems encountered in Q-routing (Boyan & Littman, 1994), namely, the inability to fine-tune routing policies under low network load and the inability to learn new optimal policies under decreasing load conditions. Unlike other memory-ba...
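The baseline that PQ-routing extends is the original Q-routing rule, in which a node updates its estimated delivery time to a destination via a neighbor from that neighbor's own best estimate. The sketch below shows only this baseline update; the data layout, parameter names, and values are illustrative assumptions, and PQ-routing's memory mechanism is not reproduced.

```python
def q_routing_update(Q, x, y, d, q_y, s, alpha=0.5):
    """Basic Q-routing update (Boyan & Littman, 1994).

    Q[node][(dest, neighbor)] estimates delivery time from `node`
    to `dest` when forwarding via `neighbor`.
    q_y: queueing delay reported at neighbor y; s: transmission time.
    """
    # Neighbor y reports its best remaining estimate to destination d.
    t = min(v for (dest, _), v in Q[y].items() if dest == d)
    old = Q[x][(d, y)]
    Q[x][(d, y)] = old + alpha * (q_y + s + t - old)
```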


P14: Anxiety Control Using Q-Learning

Anxiety disorders are among the most common reasons for referral to specialized clinics. If the response to stress is changed, anxiety can be largely controlled. The most obvious effect of stress occurs in the circulatory system, especially through sweating. The electrical conductivity of the skin, in other words the Galvanic Skin Response (GSR), which depends on stress level, is used; besides this parameter, pe...



Journal

Journal title: Federated Conference on Computer Science and Information Systems (FedCSIS)

سال: 2021

ISSN: 2300-5963

DOI: https://doi.org/10.15439/2021f109